- Path: nntp.hut.fi!usenet
- From: oahvenla@hyppynaru.cs.hut.fi (Osma Ahvenlampi)
- Newsgroups: comp.sys.amiga.programmer
- Subject: Re: GetSysTime() accuracy
- Date: 14 Apr 1996 02:26:02 +0300
- Organization: What, me, organised?
- Sender: oahvenla@hyppynaru.cs.hut.fi
- Distribution: inet
- Message-ID: <jdj91fzso39.fsf@hyppynaru.cs.hut.fi>
- References: <4khkab$7p2@toad.stack.urc.tue.nl>
- NNTP-Posting-Host: hyppynaru.cs.hut.fi
- Mime-Version: 1.0
- Content-Type: text/plain; charset=US-ASCII
- In-reply-to: jaco@stack.urc.tue.nl's message of 11 Apr 1996 02:38:35 +0200
- X-Newsreader: Gnus v5.1
-
- In article <4khkab$7p2@toad.stack.urc.tue.nl> jaco@stack.urc.tue.nl (Jaco Schoonen) writes:
- >I was playing with GetSysTime() from the timer.device today and I was
- >wondering about something:
- >If I call it in a loop, the first value for tv.micros will always end with
- >0000 or 0001. Then it gets increased by 1 some 50-250 times, and then it
- >suddenly skips about 0.02 seconds!
-
- The precision of GetSysTime() is 1 microsecond, although the accuracy
- may be off by several orders of magnitude, depending on how well
- adjusted the CIA timers and the clock frequency of the machine really
- are.
-
- You forgot to take into account that if you make a loop such as:
-
- for (i = 0; i < 1000; i++)
- {
-     GetSysTime(&tv);
-     Printf("%ld.%06ld\n", tv.tv_secs, tv.tv_micro);
- }
-
- the calls to GetSysTime() will actually be synchronised to the CIA
- clock itself, because of the task switches you cause by doing an I/O
- operation. That's why the results are milliseconds apart, and pretty
- well aligned to even multiples of a thousand microseconds.
-
- If you make a tight loop that only calls GetSysTime() several times in
- a row, you will see that the results only differ by a few odd
- microseconds. As it happens, on an A3000 the difference averages
- to 1 microsecond, but this is of course directly proportional to the
- CPU speed.
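- (Not in the original post: for readers without an Amiga at hand, the
- tight-loop experiment can be sketched in portable C, with POSIX
- gettimeofday() standing in for GetSysTime(). The helper name and the
- sample count are my own, purely illustrative.)

```c
#include <stdio.h>
#include <sys/time.h>

/* Portable analogue of the tight-loop experiment above: gettimeofday()
 * stands in for GetSysTime(). As long as nothing in the loop blocks or
 * does I/O, consecutive readings differ by only a few odd microseconds;
 * this returns the largest gap seen between two consecutive readings. */
long max_consecutive_delta_us(int samples)
{
    struct timeval prev, cur;
    long max_delta = 0;
    int i;

    gettimeofday(&prev, NULL);
    for (i = 0; i < samples; i++) {
        gettimeofday(&cur, NULL);
        long delta = (cur.tv_sec - prev.tv_sec) * 1000000L
                   + (cur.tv_usec - prev.tv_usec);
        if (delta > max_delta)
            max_delta = delta;
        prev = cur;
    }
    return max_delta;
}
```

- On a preemptively scheduled system the occasional large delta marks the
- moments the loop lost the CPU, much like the 20 ms jumps discussed below.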
-
- However, when I tried a tight loop that called GetSysTime() and SubTime()
- 20000 times within Forbid(), I still got 20-millisecond jumps after
- every 500 or so calls. I can't explain this immediately, but do note
- that 20 milliseconds is 1/50th of a second, i.e. one CIA-generated
- low-resolution "tick" (as defined by DOS). Maybe GetSysTime() forces a
- task switch sometimes? I don't know. Anyway, the normal Exec maximum
- task quantum is 4 ticks, or 80 milliseconds, if that has something to
- do with the issue.
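- (Also not in the original post: SubTime() itself is just timeval
- arithmetic, dest = dest - src, with a borrow from the seconds field
- when the microseconds underflow. A portable sketch of that arithmetic
- follows; my_timeval and my_subtime are illustrative names, not the
- actual timer.device API, though the field names match the Amiga
- struct timeval.)

```c
#include <assert.h>

/* Illustrative re-implementation of the arithmetic SubTime() performs:
 * dest = dest - src, borrowing one second when the microseconds field
 * would underflow. Field names follow the Amiga struct timeval. */
struct my_timeval {
    unsigned long tv_secs;
    unsigned long tv_micro;
};

void my_subtime(struct my_timeval *dest, const struct my_timeval *src)
{
    if (dest->tv_micro < src->tv_micro) {
        dest->tv_secs -= 1;              /* borrow one second */
        dest->tv_micro += 1000000UL;     /* = 1 second in microseconds */
    }
    dest->tv_micro -= src->tv_micro;
    dest->tv_secs  -= src->tv_secs;
}
```

- Calling this on two consecutive GetSysTime() readings gives the elapsed
- interval directly, which is how the 20 ms jumps above were measured.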
-
- I don't think you will gain any significant increase in accuracy by
- using the CIA directly. GetSysTime() is about as low-level a call as I
- can imagine.
-
- --
- Preposterous, adj. The idea that murder is a crime.
- | "Osma Ahvenlampi" <mailto:oa@iki.fi> <http://www.iki.fi/oa/> |
- | Amiga&BeBox&ClassAct&Voodoo&ARTech cool stuff: I-Net225&AWeb |
- --
-